


Relational recurrent neural networks

Neural Information Processing Systems

Memory-based neural networks model temporal data by leveraging an ability to remember information for long periods. It is unclear, however, whether they also have an ability to perform complex relational reasoning with the information they remember. Here, we first confirm our intuitions that standard memory architectures may struggle at tasks that heavily involve an understanding of the ways in which entities are connected -- i.e., tasks involving relational reasoning. We then improve upon these deficits by using a new memory module -- a Relational Memory Core (RMC) -- which employs multi-head dot product attention to allow memories to interact. Finally, we test the RMC on a suite of tasks that may profit from more capable relational reasoning across sequential information, and show large gains in RL domains (BoxWorld & Mini PacMan), program evaluation, and language modeling, achieving state-of-the-art results on the WikiText-103, Project Gutenberg, and GigaWord datasets.
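The core mechanism the abstract describes is multi-head dot-product attention applied across a set of memory slots, so that each memory can attend to, and update itself from, every other memory. A minimal NumPy sketch of that interaction step is below; the random projection matrices stand in for learned parameters, and all names and shapes are illustrative assumptions rather than the paper's actual RMC implementation (which also includes gating and row-wise MLPs).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(memory, num_heads, rng):
    """One step of memories attending over each other (illustrative sketch).

    memory: (n_slots, d_model) matrix, one row per memory slot.
    Returns an updated (n_slots, d_model) memory matrix.
    """
    n_slots, d_model = memory.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads

    # Random projections stand in for learned query/key/value weights.
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                  for _ in range(3))
    q, k, v = memory @ Wq, memory @ Wk, memory @ Wv

    # Split the model dimension into heads: (num_heads, n_slots, d_head).
    def split(x):
        return x.reshape(n_slots, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = map(split, (q, k, v))

    # Scaled dot-product attention: every slot attends to every slot.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    weights = softmax(scores, axis=-1)          # rows sum to 1 per head
    out = weights @ v                           # (num_heads, n_slots, d_head)

    # Recombine heads back into (n_slots, d_model).
    return out.transpose(1, 0, 2).reshape(n_slots, d_model)

rng = np.random.default_rng(0)
memory = rng.standard_normal((4, 32))           # 4 memory slots, width 32
updated = multi_head_attention(memory, num_heads=4, rng=rng)
```

In the RMC this attention step is embedded inside a recurrent cell, so the memory matrix is carried across time steps and new inputs are attended over alongside the existing memories.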


Reviews: Relational recurrent neural networks

Neural Information Processing Systems

This paper proposes relational recurrent neural networks to incorporate relational reasoning into memory-based neural networks. More specifically, the authors design a new memory module, called the Relational Memory Core (RMC), that allows memories to interact with one another. Multi-head dot-product attention is used for this purpose. Experiments are conducted on several supervised learning and reinforcement learning tasks that may profit from more capable relational reasoning across sequential information, to show the effectiveness of the proposed method. The main strengths of this paper are as follows: (1) It is a novel and well-motivated idea to design a new deep learning architecture that can perform relational reasoning.


Relational recurrent neural networks

Santoro, Adam, Faulkner, Ryan, Raposo, David, Rae, Jack, Chrzanowski, Mike, Weber, Theophane, Wierstra, Daan, Vinyals, Oriol, Pascanu, Razvan, Lillicrap, Timothy

Neural Information Processing Systems
